Image token removal is an efficient augmentation strategy for reducing the cost of computing image features. However, this strategy has been found to adversely affect the accuracy of CLIP-based training. We hypothesize that removing a large portion of image tokens may improperly discard the semantic content associated with a given text description, thus constituting an incorrect pairing target in CLIP training. To address this issue, we propose an attentive token removal approach for CLIP training, which retains tokens with a high semantic correlation to the text description. The correlation scores are computed in an online fashion using the EMA version of the visual encoder. Our experiments show that the proposed attentive masking approach performs better than the previous method of random token removal for CLIP training. The approach also makes it efficient to apply multiple augmentation views to the image and to introduce instance contrastive learning tasks between these views into the CLIP framework. Compared to other CLIP improvements that combine different pre-training targets such as SLIP and MaskCLIP, our method is not only more effective, but also much more efficient. Specifically, using ViT-B and the YFCC-15M dataset, our approach achieves $43.9\%$ top-1 accuracy on ImageNet-1K zero-shot classification, as well as $62.7/42.1$ and $38.0/23.2$ I2T/T2I retrieval accuracy on Flickr30K and MS COCO, which are $+1.1\%$, $+5.5/+0.9$, and $+4.4/+1.3$ higher than the SLIP method, while being $2.30\times$ faster. An efficient version of our approach running $1.16\times$ faster than the plain CLIP model achieves significant gains of $+5.3\%$, $+11.3/+8.0$, and $+9.5/+4.9$ on these benchmarks.
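A minimal PyTorch sketch of this selection step is given below; the function name, keep ratio, and tensor shapes are illustrative assumptions rather than the paper's code. It scores each patch token from the EMA visual encoder by its cosine similarity to the paired caption embedding and keeps only the top-scoring fraction, whose indices would then gather the surviving tokens for the trainable encoder.

```python
import torch
import torch.nn.functional as F

def attentive_token_keep(ema_patch_feats, text_feats, keep_ratio=0.5):
    """Keep the image patch tokens most correlated with the paired text.

    ema_patch_feats: (B, N, D) patch features from the EMA visual encoder.
    text_feats:      (B, D)    embedding of the paired caption.
    Returns the indices of the kept tokens, shape (B, K).
    """
    # Token-to-text relevance as cosine similarity, shape (B, N).
    sim = F.cosine_similarity(ema_patch_feats, text_feats.unsqueeze(1), dim=-1)
    k = max(1, int(keep_ratio * ema_patch_feats.shape[1]))
    return sim.topk(k, dim=-1).indices

# Toy usage: 2 images, 196 patch tokens, 512-dim features, keep 50% of tokens.
if __name__ == "__main__":
    patches, text = torch.randn(2, 196, 512), torch.randn(2, 512)
    print(attentive_token_keep(patches, text).shape)  # torch.Size([2, 98])
```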
Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance on in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but neglect the ability of learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm that directly generates a domain-invariant prompt that generalizes to unseen domains, called MetaPrompt. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for inputs from both image and text modalities. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains a prompt tuned on a specific domain or class to also achieve good performance on another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and four datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
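A rough sketch of how such a cross-domain constraint could be imposed is shown below, using a first-order (MAML-style) episodic update in PyTorch; the function names, loss callables, and single inner step are assumptions for illustration, not the paper's algorithm.

```python
import torch

def meta_prompt_step(prompt, seen_loss_fn, unseen_loss_fn, inner_lr=1e-2):
    """One episodic step: adapt the prompt on a 'seen' domain with a single
    gradient step, then evaluate the adapted prompt on a held-out domain and
    return that loss, pushing the prompt toward domain-invariant solutions
    (first-order approximation, no second-order gradients).

    prompt:         learnable prompt vectors, e.g. shape (n_ctx, dim), requires_grad=True.
    seen_loss_fn:   callable mapping a prompt to the frozen-CLIP loss on the seen domain.
    unseen_loss_fn: callable mapping a prompt to the loss on the held-out domain.
    """
    inner_loss = seen_loss_fn(prompt)
    (grad,) = torch.autograd.grad(inner_loss, prompt)
    adapted = prompt - inner_lr * grad.detach()   # one inner adaptation step
    return unseen_loss_fn(adapted)                # outer loss drives the meta-update

# Toy usage with stand-in quadratic losses instead of real CLIP losses.
if __name__ == "__main__":
    prompt = torch.randn(4, 512, requires_grad=True)
    seen = lambda p: (p ** 2).mean()
    unseen = lambda p: ((p - 1.0) ** 2).mean()
    meta_prompt_step(prompt, seen, unseen).backward()
    print(prompt.grad.shape)  # torch.Size([4, 512])
```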
In recent years, deep-learning-based models have achieved remarkable performance in video super-resolution (VSR), but most of these models are not suitable for online video applications. These methods consider only distortion quality and ignore key requirements of online applications, such as low latency and low model complexity. In this paper, we focus on online video transmission, where the VSR algorithm is required to generate high-resolution video sequences in real time. To address this challenge, we propose a novel kernel-knowledge-transfer method called Convolutional Kernel Bypass Grafts (CKBG). First, we design a lightweight network structure that does not require future frames as input, saving the extra time cost of caching these frames. Then, the proposed CKBG method enhances this lightweight base model by bypassing the original network with "kernel grafts", extra convolutional kernels that carry the prior knowledge of external pre-trained image SR models. At the testing stage, we further accelerate the grafted multi-branch network by converting it into a simple single-path structure. Experimental results show that our proposed method can process online video sequences at up to 110 FPS with very low model complexity and competitive SR performance.
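The single-path conversion can be illustrated with a standard convolution re-parameterization, sketched below in PyTorch under the assumption of two parallel branches with identical kernel size, stride, and padding; the paper's actual graft topology may differ.

```python
import torch
import torch.nn as nn

def merge_parallel_convs(base: nn.Conv2d, graft: nn.Conv2d) -> nn.Conv2d:
    """Fold a parallel 'graft' branch into the base conv so inference runs a
    single path: conv(x, W1) + conv(x, W2) == conv(x, W1 + W2).
    Both convs must share kernel size, stride, padding, and channel counts."""
    fused = nn.Conv2d(base.in_channels, base.out_channels,
                      base.kernel_size, base.stride, base.padding, bias=True)
    with torch.no_grad():
        fused.weight.copy_(base.weight + graft.weight)
        b1 = base.bias if base.bias is not None else torch.zeros(base.out_channels)
        b2 = graft.bias if graft.bias is not None else torch.zeros(base.out_channels)
        fused.bias.copy_(b1 + b2)
    return fused

# Sanity check: the fused single-path conv matches the two-branch output.
if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)
    base, graft = nn.Conv2d(16, 16, 3, 1, 1), nn.Conv2d(16, 16, 3, 1, 1)
    fused = merge_parallel_convs(base, graft)
    assert torch.allclose(base(x) + graft(x), fused(x), atol=1e-5)
```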
Recently, privacy concerns in person re-identification (ReID) have drawn increasing attention, and preserving the privacy of the pedestrian images used by ReID methods has become essential. De-identification (DeID) methods alleviate privacy concerns by removing identity-related information from ReID data. However, most existing DeID methods tend to remove all identity-related information and impair the usability of the de-identified data for the ReID task. In this paper, we aim to develop a technique that achieves a good trade-off between privacy protection and data usability for person ReID. To achieve this, we propose a novel de-identification method explicitly designed for person ReID, named Person Identity Shift (PIS). PIS removes the absolute identity in a pedestrian image while preserving the identity relationship between image pairs. By exploiting the interpolation property of variational auto-encoders, PIS shifts each pedestrian image from its current identity to another, new identity, producing images that still preserve relative identity. Experimental results show that our method achieves a better trade-off between privacy protection and model performance than existing de-identification methods and can defend against both human and model attacks to ensure data privacy.
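A heavily simplified sketch of the latent interpolation idea is shown below (PyTorch); the encoder/decoder interface, the interpolation coefficient, and the toy stand-ins are assumptions for illustration, not the paper's model.

```python
import torch

def identity_shift(encoder, decoder, img, ref_img, alpha=0.5):
    """Shift the absolute identity of `img` toward that of `ref_img` by linearly
    interpolating VAE latent codes; applying the same alpha to every image of a
    person keeps the relative identity relation between image pairs."""
    with torch.no_grad():
        mu_a, _ = encoder(img)        # assumed interface: encoder returns (mu, logvar)
        mu_b, _ = encoder(ref_img)
        z = (1.0 - alpha) * mu_a + alpha * mu_b
        return decoder(z)

# Toy usage with linear stand-ins for the trained VAE encoder/decoder.
if __name__ == "__main__":
    W_enc, W_dec = torch.randn(3 * 64 * 32, 128), torch.randn(128, 3 * 64 * 32)
    enc = lambda x: (x.flatten(1) @ W_enc, None)
    dec = lambda z: (z @ W_dec).view(-1, 3, 64, 32)
    a, b = torch.randn(2, 3, 64, 32), torch.randn(2, 3, 64, 32)
    print(identity_shift(enc, dec, a, b).shape)  # torch.Size([2, 3, 64, 32])
```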
Vector graphics (VG) are ubiquitous in our daily life, with extensive applications in engineering, architecture, design, and so on. The VG recognition process of most existing methods is to first render the VG into raster graphics (RG) and then recognize based on the RG format. However, this process discards the geometric structure and loses the high resolution of the VG. Recently, another category of algorithms has been proposed to recognize directly from the original VG format, but it suffers from topological errors that could have been filtered out by RG rendering. Rather than relying on a single format, using the VG and RG formats together to avoid these shortcomings is a better solution. Moreover, we argue that the VG-to-RG rendering process is essential for effectively combining VG and RG information. By specifying the rules for how VG primitives are mapped to RG pixels, the rendering process describes the interaction and correlation between VG and RG. As a result, we propose RendNet, a unified architecture for recognition in both 2D and 3D scenarios, which considers both VG/RG representations and exploits their interaction by incorporating the VG-to-RG rasterization process. Experiments show that RendNet achieves state-of-the-art performance on 2D and 3D object recognition tasks across various VG datasets.
In this paper, we consider a different data format for images: vector graphics. In contrast to raster graphics, which are widely used for image recognition, vector graphics can be scaled up or down to any resolution without aliasing or information loss, thanks to the analytic representation of the primitives in the document. Furthermore, vector graphics can provide extra structural information about how low-level elements are grouped to form high-level shapes or structures. These merits of vector graphics have not been fully exploited by existing methods. To explore this data format, we target the fundamental recognition tasks of object localization and classification. We propose an efficient CNN-free pipeline that does not render the graphics into pixels (i.e., rasterization) and takes the textual document of the vector graphics as input, called YOLaT (You Only Look at Text). YOLaT builds multi-graphs to model the structural and spatial information in vector graphics, and a dual-stream graph neural network is proposed to detect objects from the graphs. Our experiments show that by directly operating on vector graphics, YOLaT outperforms raster-graphics-based object detection baselines in terms of both average precision and efficiency.
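As a toy illustration of building a graph directly from vector primitives (much simpler than the paper's multi-graphs, which also handle curves and feed a dual-stream GNN), the sketch below turns a list of line segments into node features and an edge index; all names and shapes are assumptions.

```python
import torch

def build_primitive_graph(segments):
    """Build a simple graph from vector-graphic line segments: one node per
    endpoint (x, y), one edge per segment, with coincident endpoints merged so
    connected strokes form a single component.

    segments: list of ((x1, y1), (x2, y2)) tuples parsed from the VG document.
    Returns node features (N, 2) and an edge index (2, E) in COO format.
    """
    nodes, edges, index = [], [], {}
    for p, q in segments:
        for pt in (p, q):
            if pt not in index:          # merge coincident endpoints
                index[pt] = len(nodes)
                nodes.append(pt)
        edges.append((index[p], index[q]))
    x = torch.tensor(nodes, dtype=torch.float32)
    edge_index = torch.tensor(edges, dtype=torch.long).t()
    return x, edge_index

if __name__ == "__main__":
    segs = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 0))]
    x, ei = build_primitive_graph(segs)
    print(x.shape, ei.shape)   # torch.Size([3, 2]) torch.Size([2, 3])
```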
This paper focuses on designing efficient models with few parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
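A toy PyTorch block that combines a depth-wise convolution for short-range mixing with self-attention for long-range interactions inside an inverted-residual shape is sketched below; it is only a schematic reading of the abstract, and the real iRMB's normalization, attention form, and expansion details may differ.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Toy inverted-residual mobile block: expand channels, mix tokens with
    global self-attention (long-range) and a depth-wise 3x3 conv (short-range),
    then project back and add the residual."""
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.BatchNorm2d(dim)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        y = self.expand(self.norm(x))
        tokens = y.flatten(2).transpose(1, 2)  # (B, H*W, hidden)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        y = y + attn_out.transpose(1, 2).reshape(b, -1, h, w)
        y = self.dwconv(y)                     # local, depth-wise mixing
        return x + self.project(y)             # inverted residual connection

if __name__ == "__main__":
    blk = iRMBSketch(dim=32)
    print(blk(torch.randn(2, 32, 14, 14)).shape)  # torch.Size([2, 32, 14, 14])
```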
We aim to bridge the gap between common-sense, few-sample human learning and large-data machine learning. We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle. Modelling human learning is difficult because how people learn varies from one person to another. Under commonly accepted definitions, we prove that all human or animal few-shot learning, and major models of such learning including the Free Energy Principle and Bayesian Program Learning, approximate our theory under the Church-Turing thesis. We find that deep generative models such as the variational autoencoder (VAE) can be used to approximate our theory and perform significantly better than baseline models, including deep neural networks, on image recognition, low-resource language processing, and character recognition.
Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires a separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and to address supervised, zero-shot, generalized zero-shot, and open-set recognition within a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open-set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
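The distance constraint can be written as a simple hinge loss over prototype distances, as in the PyTorch sketch below; the margin value, distance metric, and uniform weighting are illustrative assumptions (the paper uses a weighted maximum-margin formulation).

```python
import torch
import torch.nn.functional as F

def vocabulary_margin_loss(emb, labels, prototypes, margin=0.1):
    """Hinge loss enforcing that each labeled sample is projected closer (in the
    semantic embedding space) to its own class prototype than to every other
    vocabulary atom, by at least `margin`.

    emb:        (B, D) projected samples.
    labels:     (B,)   index of each sample's prototype.
    prototypes: (V, D) word-vector prototypes for the whole vocabulary.
    """
    d = torch.cdist(emb, prototypes)                   # (B, V) pairwise distances
    pos = d.gather(1, labels.unsqueeze(1))             # (B, 1) distance to own prototype
    own = F.one_hot(labels, prototypes.size(0)).bool() # mask of the correct prototypes
    viol = F.relu(margin + pos - d).masked_fill(own, 0.0)
    return viol.mean()

if __name__ == "__main__":
    emb, protos = torch.randn(8, 300), torch.randn(1000, 300)
    labels = torch.randint(0, 1000, (8,))
    print(vocabulary_margin_loss(emb, labels, protos))
```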
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
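A toy tabular sketch of the two-level structure is given below in NumPy: each slow state is frozen while a short-horizon fast MDP is solved by backward induction, and value iteration then runs over the slow states. The upper-level reward and transition construction here are schematic assumptions made for illustration; the paper's algorithm, its function-approximation variant, and its regret analysis are considerably richer.

```python
import numpy as np

def lower_level_value(P_fast, R, slow_state, horizon, gamma):
    """Backward induction on the fast chain with the slow state frozen.
    P_fast: (A, F, F) fast-state transitions; R: (S, F, A) rewards."""
    V = np.zeros(P_fast.shape[1])
    for _ in range(horizon):
        Q = R[slow_state].T + gamma * (P_fast @ V)   # (A, F) action values
        V = Q.max(axis=0)
    return V

def frozen_state_vi(P_slow, P_fast, R, horizon, gamma, iters=200):
    """Upper-level value iteration on the slow timescale: each slow state's
    stage reward is the value of its frozen lower-level MDP (averaged over
    fast states here for simplicity), and it transitions via the slow chain."""
    S = P_slow.shape[0]
    r = np.array([lower_level_value(P_fast, R, s, horizon, gamma).mean()
                  for s in range(S)])
    U = np.zeros(S)
    for _ in range(iters):
        U = r + (gamma ** horizon) * (P_slow @ U)
    return U

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, F, S = 2, 3, 4
    P_fast = rng.dirichlet(np.ones(F), size=(A, F))   # fast-state transition kernels
    P_slow = rng.dirichlet(np.ones(S), size=S)        # slow-state Markov chain
    R = rng.random((S, F, A))
    print(frozen_state_vi(P_slow, P_fast, R, horizon=5, gamma=0.95))
```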